
    Barcoding of the cytochrome oxidase I (COI) indicates a recent introduction of Ciona savignyi into New Zealand and provides a rapid method for Ciona species discrimination

    Mitochondrial cytochrome oxidase I (COI) gene sequencing (DNA barcoding) of Ciona specimens from New Zealand (NZ) led to the first record of the solitary ascidian Ciona savignyi in the Southern Hemisphere. We sought to quantify C. savignyi COI genetic diversity around the NZ archipelago and to compare this with diversity within C. savignyi's native range in the north-west Pacific. Ciona savignyi specimens were collected from two NZ sites and from three sites around Japan. COI sequences (595 bp) were amplified and measures of genetic diversity were calculated. Based on differences between their COI sequences, we developed a PCR-based assay to distinguish C. savignyi from the morphologically similar C. intestinalis. A total of 12 C. savignyi COI haplotypes were recovered from the 76 samples. Of the four haplotypes observed in NZ, two were unique. Of the 10 haplotypes observed in the Japan samples, eight were unique. The C. savignyi populations in Japan contained higher haplotype diversity than those in NZ. The NZ samples contained only a small subset of the haplotype variation of the Japan samples; however, the NZ samples did harbor two haplotypes not observed in the Japan samples. The PCR-based assay developed from the COI sequences reliably discriminated the two Ciona species. The low COI genetic diversity within the two NZ C. savignyi populations sampled is consistent with a founder-effect-associated loss of genetic diversity. The robust PCR-based assay for distinguishing C. savignyi and C. intestinalis may find application in ecological and taxonomic studies and can be applied to both archival materials and live animals.
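
    As a rough illustration of the haplotype-diversity measure summarized above (the abstract does not name the software the authors used, and the sequences below are invented), Nei's haplotype diversity h = n/(n-1) * (1 - sum p_i^2) can be computed directly from a set of aligned COI sequences:

```python
# Illustrative sketch only: Nei's haplotype (gene) diversity from a list of
# aligned COI sequences. The toy sequences below are hypothetical; the
# published analysis may have used dedicated population-genetics software.
from collections import Counter

def haplotype_diversity(sequences):
    """Nei's unbiased haplotype diversity: h = n/(n-1) * (1 - sum(p_i^2))."""
    n = len(sequences)
    if n < 2:
        return 0.0
    counts = Counter(sequences)                 # identical sequences = one haplotype
    freqs = [c / n for c in counts.values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

# Hypothetical toy data: 6 samples carrying 3 distinct haplotypes
samples = ["ACGTAC", "ACGTAC", "ACGTAT", "ACGTAT", "ACGTAT", "ACCTAC"]
print(f"haplotypes: {len(set(samples))}, diversity h = {haplotype_diversity(samples):.3f}")
```

    Identical sequences collapse to a single haplotype, so a population carrying few haplotypes, as reported for the NZ sites, translates directly into a low h.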

    Error bars in experimental biology

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
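
    A brief sketch can make the article's central caution concrete (the data below are simulated and the article itself does not prescribe any particular software): the same sample yields bars of very different length depending on whether the standard deviation, the standard error of the mean, or a 95% confidence interval is plotted.

```python
# Sketch (not from the article): one simulated sample, three different
# error-bar half-widths, showing why legends must state what bars represent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=12)   # hypothetical measurements

n = sample.size
mean = sample.mean()
sd = sample.std(ddof=1)                             # descriptive: spread of the data
sem = sd / np.sqrt(n)                               # inferential: precision of the mean
ci_half = stats.t.ppf(0.975, n - 1) * sem           # 95% CI half-width

print(f"mean = {mean:.2f}")
print(f"SD  bar: +/- {sd:.2f}")
print(f"SEM bar: +/- {sem:.2f}")
print(f"95% CI : +/- {ci_half:.2f}")
```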

    Confidence Intervals Permit, but Do Not Guarantee, Better Inference than Statistical Significance Testing

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.
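
    The key statistical point, that one significant and one non-significant result can nonetheless be mutually consistent, can be shown with a worked sketch. The summary statistics below are invented for illustration and are not the fictitious studies used in the surveys:

```python
# Hypothetical sketch: two studies with similar effects, one "significant",
# one not, yet the two estimates are statistically consistent with each other.
import numpy as np
from scipy import stats

def summarize(diff, se, df, label):
    t = diff / se
    p = 2 * stats.t.sf(abs(t), df)
    half = stats.t.ppf(0.975, df) * se
    print(f"{label}: diff = {diff:.2f}, 95% CI [{diff - half:.2f}, {diff + half:.2f}], p = {p:.3f}")
    return diff, se

# Invented mean differences and standard errors
a = summarize(diff=3.0, se=1.3, df=40, label="Study A (significant)")
b = summarize(diff=2.5, se=1.6, df=28, label="Study B (non-significant)")

# Consistency check: CI for the difference between the two estimates
d = a[0] - b[0]
se_d = np.hypot(a[1], b[1])
half = 1.96 * se_d
print(f"A - B: {d:.2f}, 95% CI [{d - half:.2f}, {d + half:.2f}]  -> includes 0, so the results are consistent")
```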

    Researchers Should Make Thoughtful Assessments Instead of Null-Hypothesis Significance Tests

    Null-hypothesis significance tests (NHSTs) have received much criticism, especially during the last two decades. Yet, many behavioral and social scientists are unaware that NHSTs have drawn increasing criticism, so this essay summarizes key criticisms. The essay also recommends alternative ways of assessing research findings. Although these recommendations are not complex, they do involve ways of thinking that many behavioral and social scientists find novel. Instead of making NHSTs, researchers should adapt their research assessments to specific contexts and specific research goals, and then explain their rationales for selecting assessment indicators. Researchers should show the substantive importance of findings by reporting effect sizes and should acknowledge uncertainty by stating confidence intervals. By comparing data with naïve hypotheses rather than with null hypotheses, researchers can challenge themselves to develop better theories. Parsimonious models are easier to understand and they generalize more reliably. Robust statistical methods tolerate deviations from assumptions about samples.
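
    As a minimal illustration of the recommendation to report effect sizes with confidence intervals, the sketch below computes Cohen's d and an approximate 95% CI for two invented groups; the data and the large-sample CI approximation are illustrative assumptions, not prescriptions from the essay.

```python
# Hypothetical sketch: report an effect size (Cohen's d) with a 95% CI
# rather than only a significance verdict. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
group1 = rng.normal(52.0, 10.0, size=30)   # invented scores
group2 = rng.normal(47.0, 10.0, size=30)

n1, n2 = len(group1), len(group2)
pooled_var = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
d = (group1.mean() - group2.mean()) / np.sqrt(pooled_var)

# Large-sample approximation to the standard error of d
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```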

    Short-course antiretroviral therapy in primary HIV infection

    Background: Short-course antiretroviral therapy (ART) in primary human immunodeficiency virus (HIV) infection may delay disease progression but has not been adequately evaluated. Methods: We randomly assigned adults with primary HIV infection to ART for 48 weeks, ART for 12 weeks, or no ART (standard of care), with treatment initiated within 6 months after seroconversion. The primary end point was a CD4+ count of less than 350 cells per cubic millimeter or long-term ART initiation. Results: A total of 366 participants (60% men) underwent randomization to 48-week ART (123 participants), 12-week ART (120), or standard care (123), with an average follow-up of 4.2 years. The primary end point was reached in 50% of the 48-week ART group, as compared with 61% in each of the 12-week ART and standard-care groups. The average hazard ratio was 0.63 (95% confidence interval [CI], 0.45 to 0.90; P = 0.01) for 48-week ART as compared with standard care and was 0.93 (95% CI, 0.67 to 1.29; P = 0.67) for 12-week ART as compared with standard care. The proportion of participants who had a CD4+ count of less than 350 cells per cubic millimeter was 28% in the 48-week ART group, 40% in the 12-week group, and 40% in the standard-care group. Corresponding values for long-term ART initiation were 22%, 21%, and 22%. The median time to the primary end point was 65 weeks (95% CI, 17 to 114) longer with 48-week ART than with standard care. Post hoc analysis identified a trend toward a greater interval between ART initiation and the primary end point the closer that ART was initiated to estimated seroconversion (P = 0.09), and 48-week ART conferred a reduction in the HIV RNA level of 0.44 log10 copies per milliliter (95% CI, 0.25 to 0.64) 36 weeks after the completion of short-course therapy. There were no significant between-group differences in the incidence of the acquired immunodeficiency syndrome, death, or serious adverse events. Conclusions: A 48-week course of ART in patients with primary HIV infection delayed disease progression, although not significantly longer than the duration of the treatment. There was no evidence of adverse effects of ART interruption on the clinical outcome. (Funded by the Wellcome Trust; SPARTAC Controlled-Trials.com number, ISRCTN76742797, and EudraCT number, 2004-000446-20.)

    Eliciting group judgements about replicability: a technical implementation of the IDEA Protocol

    In recent years there has been increased interest in replicating prior research. One of the biggest challenges to assessing replicability is the cost in resources and time that it takes to repeat studies. Thus, there is an impetus to develop rapid elicitation protocols that can, in a practical manner, estimate the likelihood that research findings will successfully replicate. We employ a novel implementation of the IDEA (‘Investigate’, ‘Discuss’, ‘Estimate’ and ‘Aggregate’) protocol, realised through the repliCATS platform. The repliCATS platform is designed to scalably elicit expert opinion about the replicability of social and behavioural science research. The IDEA protocol provides a structured methodology for eliciting judgements and reasoning from groups. This paper describes the repliCATS platform as a multi-user cloud-based software platform featuring (1) a technical implementation of the IDEA protocol for eliciting expert opinion on research replicability, (2) capture of consent and demographic data, (3) on-line training on replication concepts, and (4) exporting of completed judgements. The platform has, to date, evaluated 3432 social and behavioural science research claims from 637 participants.
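
    The abstract does not describe how the platform mathematically combines judgements in the ‘Aggregate’ step, so the following is only a schematic sketch: individual three-point estimates (lower bound, best estimate, upper bound on replication probability) combined by unweighted averaging. All names and values are hypothetical.

```python
# Purely illustrative: combine individual three-point estimates into a
# group judgement by unweighted averaging. The repliCATS platform's actual
# aggregation method is not described in the abstract above.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Estimate:
    lower: float   # plausible lower bound on replication probability
    best: float    # best estimate
    upper: float   # plausible upper bound

def aggregate(estimates):
    return Estimate(
        lower=mean(e.lower for e in estimates),
        best=mean(e.best for e in estimates),
        upper=mean(e.upper for e in estimates),
    )

group = [Estimate(0.3, 0.5, 0.7), Estimate(0.4, 0.6, 0.8), Estimate(0.2, 0.45, 0.6)]
print(aggregate(group))
```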

    Better Together: Reliable Application of the Post-9/11 and Post-Iraq US Intelligence Tradecraft Standards Requires Collective Analysis

    Background: The events of 9/11 and the October 2002 National Intelligence Estimate on Iraq’s Continuing Programs for Weapons of Mass Destruction precipitated fundamental changes within the United States Intelligence Community. As part of the reform, analytic tradecraft standards were revised and codified into a policy document – Intelligence Community Directive (ICD) 203 – and an analytic ombudsman was appointed in the newly created Office for the Director of National Intelligence to ensure compliance across the intelligence community. In this paper we investigate the untested assumption that the ICD203 criteria can facilitate reliable evaluations of analytic products. Methods: Fifteen independent raters used a rubric based on the ICD203 criteria to assess the quality of reasoning of 64 analytical reports generated in response to hypothetical intelligence problems. We calculated the intra-class correlation coefficients for single and group-aggregated assessments. Results: Despite general training and rater calibration, the reliability of individual assessments was poor. However, aggregate ratings showed good to excellent reliability. Conclusion: Given that real problems will be more difficult and complex than our hypothetical case studies, we advise that groups of at least three raters are required to obtain reliable quality control procedures for intelligence products. Our study sets limits on assessment reliability and provides a basis for further evaluation of the predictive validity of intelligence reports generated in compliance with the tradecraft standards.
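
    The reliability statistics referred to above can be reproduced in outline from a reports-by-raters matrix using the standard Shrout and Fleiss two-way random-effects formulas, ICC(2,1) for a single rater and ICC(2,k) for the average of k raters. The ratings below are invented and are not the study's data:

```python
# Illustrative ICC(2,1) (single rater) and ICC(2,k) (k-rater average) from a
# targets x raters matrix, via the Shrout & Fleiss two-way random formulas.
import numpy as np

def icc_two_way_random(x):
    n, k = x.shape                                              # n reports, k raters
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)        # between reports
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)        # between raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                              # residual
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average

ratings = np.array([   # 5 hypothetical reports rated by 3 raters
    [6, 7, 6],
    [3, 4, 4],
    [8, 8, 7],
    [2, 3, 2],
    [5, 6, 6],
], dtype=float)
single, average = icc_two_way_random(ratings)
print(f"ICC(2,1) = {single:.2f}, ICC(2,3) = {average:.2f}")
```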
